
    A Fixed-Point Algorithm for Closed Queueing Networks

    In this paper we propose a new, efficient iterative scheme for solving closed queueing networks with phase-type service time distributions. The method is especially efficient and accurate for networks with many nodes and large customer populations. We present the method, put it in perspective, and validate it through a large number of test scenarios. In most cases, the method achieves a relative error within 5% compared to discrete-event simulation.
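The paper's scheme targets phase-type service times, which the abstract does not spell out. As a simpler illustration of the same fixed-point idea for closed networks, the sketch below implements Schweitzer's approximate MVA, a classic fixed-point iteration over per-queue populations (this is a well-known stand-in, not the paper's algorithm):

```python
def schweitzer_amva(demands, n_customers, tol=1e-9, max_iter=10000):
    """Fixed-point (Schweitzer) approximate MVA for a closed network of
    single-server queues. `demands` are the mean service demands per queue.
    Returns (throughput, per-queue mean populations)."""
    m = len(demands)
    q = [n_customers / m] * m          # initial guess: customers spread evenly
    for _ in range(max_iter):
        # Residence time at each queue, given the current population guess
        r = [d * (1 + qi * (n_customers - 1) / n_customers)
             for d, qi in zip(demands, q)]
        x = n_customers / sum(r)       # system throughput (no think time)
        q_new = [x * ri for ri in r]   # Little's law applied per queue
        if max(abs(a - b) for a, b in zip(q_new, q)) < tol:
            return x, q_new
        q = q_new
    return x, q
```

For two identical queues with unit demand and two customers, the iteration converges immediately to the exact MVA solution (throughput 2/3, one customer at each queue on average).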

    SSHCure: a flow-based SSH intrusion detection system

    SSH attacks are a major concern for network managers, due to the danger associated with a successful compromise. Detecting these attacks, and possibly compromised victims, is therefore a crucial activity. Most existing network intrusion detection systems designed for this purpose rely on the inspection of individual packets and, hence, do not scale to today's high-speed networks. To overcome this issue, this paper proposes SSHCure, a flow-based intrusion detection system for SSH attacks. It employs an efficient algorithm for the real-time detection of ongoing attacks and allows identification of compromised attack targets. A prototype of the algorithm, including a graphical user interface, has been implemented as a plugin for the popular NfSen monitoring tool. Finally, the detection performance of the system is validated with empirical traffic data.
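Flow-based detection means reasoning over flow records (source, destination, port, packet counts) rather than packet payloads. As a toy illustration of this style of heuristic, not SSHCure's actual algorithm, the sketch below flags sources that generate many SSH flows with near-identical packets-per-flow, a pattern typical of automated brute-force attempts; the thresholds are hypothetical:

```python
from collections import defaultdict

def flag_bruteforce_candidates(flows, min_flows=20, ppf_spread=1):
    """Toy flow-based heuristic: a brute-forcing client produces many SSH
    flows whose packet counts are almost identical. `flows` is an iterable
    of (src_ip, dst_ip, dst_port, packets) tuples; thresholds are
    illustrative only."""
    per_src = defaultdict(list)
    for src, dst, port, pkts in flows:
        if port == 22:                 # only consider SSH traffic
            per_src[src].append(pkts)
    suspects = []
    for src, counts in per_src.items():
        if len(counts) >= min_flows and max(counts) - min(counts) <= ppf_spread:
            suspects.append(src)
    return suspects
```

Because only aggregate flow records are inspected, such logic can keep up with link speeds where per-packet inspection cannot.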

    Internet Bad Neighborhoods: the Spam Case


    Simpleweb/University of Twente Traffic Traces Data Repository

    The computer networks research community lacks shared measurement data. As a consequence, most researchers need to spend a considerable part of their time planning and executing measurements before being able to perform their studies. The lack of shared data also makes it hard to compare and validate results. This report describes our efforts to distribute a portion of our network data through the Simpleweb/University of Twente Traffic Traces Data Repository.

    Improving Performance of QUIC in WiFi

    QUIC is a new transport protocol under standardization since 2016. Initially developed by Google as an experiment, the protocol is already deployed at large scale, thanks to its support in Chromium and Google's servers. In this paper we experimentally analyze the performance of QUIC in WiFi networks. We perform experiments using both a controlled WiFi testbed and a production WiFi mesh network. In particular, we study how QUIC interacts with MAC-layer features such as IEEE 802.11 frame aggregation. We show that the current implementation of QUIC in Chromium achieves sub-optimal throughput in wireless networks. Indeed, burstiness in modern WiFi standards may improve network performance, and we show that Bursty QUIC (BQUIC), a customized version of QUIC designed to increase its burstiness, can achieve better performance in WiFi. BQUIC outperforms the current version of QUIC in WiFi, with throughput gains ranging from 20% to 30%.
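The intuition behind increasing burstiness is that IEEE 802.11 aggregation (A-MPDU) can only batch frames that are queued back-to-back, so smoothly paced datagrams miss the aggregation opportunity. The sketch below is a minimal, hypothetical illustration of that idea (it is not BQUIC's implementation): outgoing datagrams are held and released in bursts so the MAC layer sees several frames at once:

```python
class BurstPacer:
    """Illustrative sketch: queue outgoing datagrams and release them in
    bursts of `burst_size`, so the WiFi MAC layer has back-to-back frames
    available for aggregation. `send_fn` is any callable that transmits
    one datagram (e.g. a socket send); names here are hypothetical."""

    def __init__(self, send_fn, burst_size=8):
        self.send_fn = send_fn
        self.burst_size = burst_size
        self.queue = []

    def submit(self, datagram):
        """Buffer a datagram; flush once a full burst has accumulated."""
        self.queue.append(datagram)
        if len(self.queue) >= self.burst_size:
            self.flush()

    def flush(self):
        """Transmit all buffered datagrams back-to-back."""
        for dgram in self.queue:
            self.send_fn(dgram)
        self.queue.clear()
```

A real implementation would also flush on a timer to bound the added queueing delay, trading a little latency for aggregation efficiency.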

    Fitting World-Wide Web Request Traces with the EM-Algorithm

    In recent years, several studies have shown that network traffic exhibits the property of self-similarity. Traditional (Poissonian) modelling approaches cannot describe this property and generally lead to the underestimation of interesting performance measures. Crovella and Bestavros have shown that network traffic due to World Wide Web transfers shows characteristics of self-similarity, and they argue that this can be explained by the heavy-tailedness of many of the involved distributions. Given these facts, developing methods that can handle self-similarity and heavy-tailedness is of great importance for network capacity planning. In this paper we discuss two methods to fit hyper-exponential distributions to data sets that exhibit heavy tails. One method is taken from the literature and shown to fall short; the other, a new method, is shown to perform well in a number of case studies.
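A hyper-exponential distribution is a probabilistic mixture of exponentials, and the EM algorithm fits one by alternating between assigning each sample a responsibility per phase (E-step) and re-estimating the mixture weights and rates (M-step). The sketch below shows a plain EM fit for a k-phase hyper-exponential; it is a generic textbook version, not the paper's refined method for heavy-tailed data:

```python
import math

def fit_hyperexp_em(data, k=2, iters=200):
    """EM fit of a k-phase hyper-exponential (mixture of exponentials).
    Returns mixture weights p and rates lam. Generic illustration; the
    paper's new method adds refinements for heavy-tailed data sets."""
    n = len(data)
    mean = sum(data) / n
    p = [1.0 / k] * k
    lam = [(j + 1) / mean for j in range(k)]   # spread rates over data scale
    for _ in range(iters):
        # E-step: responsibility of each phase j for each sample x
        resp = []
        for x in data:
            dens = [p[j] * lam[j] * math.exp(-lam[j] * x) for j in range(k)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: re-estimate weights and rates from responsibilities
        for j in range(k):
            rj = sum(r[j] for r in resp)
            if rj < 1e-12:                      # degenerate phase: skip update
                continue
            p[j] = rj / n
            lam[j] = rj / sum(r[j] * x for r, x in zip(resp, data))
    return p, lam
```

A useful sanity check is that each M-step preserves the sample mean: the fitted mixture mean, the sum over phases of p[j]/lam[j], equals the empirical mean of the data.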

    Internet Bad Neighborhoods Aggregation

    Internet Bad Neighborhoods have proven to be an innovative approach for fighting spam. They have also helped to understand how spammers are distributed on the Internet. In our previous work, the size of each bad neighborhood was fixed at a /24 subnetwork. In this paper, we investigate whether it is feasible to aggregate Internet bad neighborhoods not only at /24, but at any network prefix. To that end, we propose two different aggregation strategies: fixed prefix and variable prefix. The motivation is to reduce the number of entries in the bad neighborhood list, thus reducing memory storage requirements for intrusion detection solutions. We also introduce two error measures that quantify how much error is incurred by the aggregation process. We evaluate both strategies by analyzing real-world data in our aggregation prototype.
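The fixed-prefix strategy is the simpler of the two: every malicious host score is collapsed into the enclosing network of a chosen prefix length, shrinking the blocklist at the cost of some precision. The sketch below illustrates that step (an illustrative reconstruction, not the paper's prototype); the variable-prefix strategy would instead merge adjacent prefixes while tracking the error measures the paper introduces:

```python
import ipaddress
from collections import Counter

def aggregate_fixed_prefix(host_scores, prefix_len=24):
    """Fixed-prefix aggregation sketch: collapse per-host spam counts into
    per-prefix 'bad neighborhood' scores, e.g. one /24 entry instead of up
    to 256 host entries."""
    agg = Counter()
    for host, score in host_scores.items():
        # strict=False lets us pass a host address and get its network
        net = ipaddress.ip_network(f"{host}/{prefix_len}", strict=False)
        agg[str(net)] += score
    return dict(agg)
```

For example, two spamming hosts inside 192.0.2.0/24 become a single list entry whose score is the sum of the per-host scores, which is exactly where the aggregation error comes from: benign hosts in the same prefix inherit the neighborhood's reputation.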

    Attacks by “Anonymous” WikiLeaks Proponents not Anonymous

    On November 28, 2010, the world started watching the whistleblower website WikiLeaks as it began publishing part of the 250,000 US Embassy Diplomatic cables. These confidential cables provide insight into U.S. international affairs from 274 different embassies, covering topics such as analyses of host countries and leaders and even requests for spying on United Nations leaders.

    The release of these cables caused reactions not only in the real world, but also on the Internet. In fact, a cyberwar started just before the initial release. WikiLeaks reported that their servers were experiencing distributed denial-of-service (DDoS) attacks. A DDoS attack consists of many computers trying to overload a server by firing a high number of requests, ultimately leading to service disruption. In this case, the goal was to prevent the release of the embassy cables.

    After the initial cable release, several companies severed ties with WikiLeaks. One of the first was Amazon.com, which removed the WikiLeaks website from its servers. Next, EveryDNS, the company with which the domain wikileaks.org was registered, dropped the domain entries from its servers. On December 4th, PayPal cancelled the account that WikiLeaks was using to receive online donations. On the 6th, the Swiss bank PostFinance froze the WikiLeaks assets and Mastercard stopped processing payments to the WikiLeaks account. Visa followed Mastercard on December 7th.

    These reactions caused a group of Internet activists (or “hacktivists”) named Anonymous to start a retaliation against PostFinance, PayPal, MasterCard, Visa, Moneybookers.com and Amazon.com, named “Operation Payback”. The retaliation was performed as DDoS attacks on the websites of those companies, disrupting their activities (except in the case of Amazon.com) for different periods of time.

    The Anonymous group consists of volunteers that use a stress-testing tool to perform the attacks. This tool, named LOIC (Low Orbit Ion Cannon), is available both as a desktop application and as a Web page.

    Even though the group behind the attacks claims to be anonymous, the tools they provide do not offer any security services, such as anonymization. As a consequence, a hacktivist who volunteers to take part in such attacks can be traced back easily. This is the case for both current versions of the LOIC tool. Therefore, the goal of this report is to present an analysis of privacy issues in the context of these attacks, and to raise awareness of the risks of taking part in them.